Accurate and automatic organ segmentation from 3D radiological scans is an important yet challenging problem for medical image analysis. Specifically, the pancreas demonstrates very high inter-patient anatomical variability in both its shape and volume. In this paper, we present an automated system using 3D computed tomography (CT) volumes via a two-stage cascaded approach: pancreas localization and segmentation. For the first step, we localize the pancreas from the entire 3D CT scan, providing a reliable bounding box for the more refined segmentation step. We introduce a fully deep-learning approach, based on an efficient application of holistically-nested convolutional networks (HNNs) on the three orthogonal axial, sagittal, and coronal views. The resulting HNN per-pixel probability maps are then fused using pooling to reliably produce a 3D bounding box of the pancreas that maximizes the recall. We show that our introduced localizer compares favorably to both a conventional non-deep-learning method and a recent hybrid approach based on spatial aggregation of superpixels using random forest classification. The second, segmentation, phase operates within the computed bounding box and integrates semantic mid-level cues of deeply-learned organ interior and boundary maps, obtained by two additional and separate realizations of HNNs. By integrating these two mid-level cues, our method is capable of generating boundary-preserving pixel-wise class label maps that result in the final pancreas segmentation. Quantitative evaluation is performed on a publicly available dataset of 82 patient CT scans using 4-fold cross-validation (CV). We achieve a Dice similarity coefficient (DSC) of 81.27+/-6.27% in validation, which significantly outperforms previous state-of-the-art methods that report DSCs of 71.80+/-10.70% and 78.01+/-8.20%, respectively, using the same dataset.
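The localization step described above (fusing per-view HNN probability maps by pooling, then deriving a recall-maximizing 3D bounding box) can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the paper's implementation: the function name `fuse_and_bound`, the element-wise max as the pooling operator, and the fixed threshold are all hypothetical choices; the sketch assumes the three per-view probability maps have already been resampled onto a common 3D voxel grid.

```python
import numpy as np

def fuse_and_bound(p_axial, p_sagittal, p_coronal, threshold=0.5):
    """Hypothetical sketch of the localization step: fuse per-view
    probability volumes by element-wise max pooling, threshold the
    fused map, and return the 3D bounding box of the foreground.
    Taking the max across views favors recall: a voxel is kept if
    ANY view assigns it a high pancreas probability."""
    # Element-wise max pooling across the three orthogonal views.
    fused = np.maximum.reduce([p_axial, p_sagittal, p_coronal])
    mask = fused >= threshold
    if not mask.any():
        return None  # no candidate pancreas voxels found
    idx = np.argwhere(mask)               # coordinates of candidate voxels
    lo = idx.min(axis=0)                  # lowest corner (inclusive)
    hi = idx.max(axis=0) + 1              # highest corner (exclusive)
    # (start, stop) index pair per axis, usable as a crop for stage two.
    return tuple(zip(lo.tolist(), hi.tolist()))
```

The returned per-axis (start, stop) ranges would then define the crop within which the second, segmentation, stage operates.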